
Fully-Connected Layer



The Unreasonable Effectiveness of Fully-Connected Layers for Low-Data Regimes

Neural Information Processing Systems

We perform classification experiments across a wide range of network backbones and several standard datasets, in both supervised and active learning settings.
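To make the object of study concrete, here is a minimal sketch of a fully-connected classification head applied to backbone features. All dimensions and weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_head(features, W, b):
    # A single fully-connected (dense) layer: logits = features @ W + b.
    return features @ W + b

# Toy setup (hypothetical sizes): 5 samples of 32-dim backbone features,
# classified into 10 classes.
n_samples, feat_dim, n_classes = 5, 32, 10
features = rng.normal(size=(n_samples, feat_dim))
W = rng.normal(size=(feat_dim, n_classes)) * 0.01
b = np.zeros(n_classes)

logits = fc_head(features, W, b)
preds = logits.argmax(axis=1)
```

In a low-data regime, this head (or a stack of such layers) is the entire trainable classifier on top of frozen or lightly tuned backbone features.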


MemoryFormer : Minimize Transformer Computation by Removing Fully-Connected Layers

Neural Information Processing Systems

In order to reduce the computational complexity of large language models, great efforts have been made to improve the efficiency of transformer models, such as linear attention and flash-attention. However, model size and the corresponding computational complexity are constantly scaled up in pursuit of higher performance. In this work, we present MemoryFormer, a novel transformer architecture that significantly reduces computational complexity (FLOPs) from a new perspective. We eliminate nearly all the computation of the transformer model except for that required by the multi-head attention operation. This is made possible by an alternative method of feature transformation that replaces the linear projection of fully-connected layers. Specifically, we first construct a group of in-memory lookup tables that store a large number of discrete vectors to replace the weight matrix used in linear projection.
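The lookup-table idea can be sketched as follows: split the input vector into chunks, hash each chunk to a row of a per-chunk table, and sum the retrieved vectors in place of a dense matrix multiply. This is a simplified sketch using a sign-bit locality-sensitive hash; the hashing scheme, table sizes, and dimensions here are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_index(chunk, proj):
    # Sign-bit locality-sensitive hash: each random projection contributes
    # one bit; the bits form an integer index into a lookup table.
    bits = (chunk @ proj > 0).astype(int)            # shape: (n_bits,)
    return int(bits @ (1 << np.arange(bits.size)))   # bits -> table row

def memory_layer(x, tables, projs):
    # Replaces x @ W: each chunk of x selects one stored vector from its
    # table, and the retrieved vectors are summed to form the output.
    chunks = np.split(x, len(tables))
    return sum(t[lsh_index(c, p)] for c, t, p in zip(chunks, tables, projs))

# Hypothetical sizes: 16-dim input, 8-dim output, 4 tables of 2**3 rows.
d_in, d_out, n_tables, n_bits = 16, 8, 4, 3
chunk_dim = d_in // n_tables
tables = [rng.normal(size=(2 ** n_bits, d_out)) for _ in range(n_tables)]
projs = [rng.normal(size=(chunk_dim, n_bits)) for _ in range(n_tables)]

x = rng.normal(size=d_in)
y = memory_layer(x, tables, projs)  # no d_in x d_out weight matrix involved
```

The design trade-off is memory for compute: the tables grow exponentially in the number of hash bits, but the forward pass needs only hashing and additions rather than a full matrix multiplication.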

A Appendix

B General experimental setup

All experimental results presented in Section 5 were evaluated on an HTCondor cluster (see [

Neural Information Processing Systems

This section summarizes the different algorithms used for the numerical studies in Section 5. For all other benchmarks we use max_depth = 3 and num_boost_rounds = 50; default values are used for all other hyperparameters. Figure 1 presents results for benchmark problems with known constraints. Domain bounds without decimals indicate integer-valued variables.
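The hyperparameter settings quoted above can be captured as a small configuration sketch. The parameter names follow common gradient-boosting conventions; which library the authors used, and its exact argument names, are assumptions here.

```python
# Hypothetical configuration mirroring the hyperparameters stated in the text.
# "max_depth" bounds tree depth; "num_boost_rounds" is the number of boosting
# iterations. All other hyperparameters are left at library defaults.
params = {"max_depth": 3}
num_boost_rounds = 50
```

Keeping such settings in one dictionary makes it easy to report them alongside results and to reproduce the benchmark runs.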